sparse code
Semi-Unified Sparse Dictionary Learning with Learnable Top-K LISTA and FISTA Encoders
Lin, Fengsheng, Yan, Shengyi, Tran, Trac Duy
We present a semi-unified sparse dictionary learning framework that bridges the gap between classical sparse models and modern deep architectures. Specifically, the method integrates strict Top-$K$ LISTA and its convex FISTA-based variant (LISTAConv) into the discriminative LC-KSVD2 model, enabling co-evolution between the sparse encoder and the dictionary under supervised or unsupervised regimes. This unified design retains the interpretability of traditional sparse coding while benefiting from efficient, differentiable training. We further establish a PALM-style convergence analysis for the convex variant, ensuring theoretical stability under block alternation. Experimentally, our method achieves 95.6\% on CIFAR-10, 86.3\% on CIFAR-100, and 88.5\% on TinyImageNet with faster convergence and lower memory cost ($<$4GB GPU). The results confirm that the proposed LC-KSVD2 + LISTA/LISTAConv pipeline offers an interpretable and computationally efficient alternative to modern deep architectures.
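The strict Top-$K$ LISTA encoder described in the abstract can be illustrated with a minimal NumPy sketch: unrolled ISTA iterations where the usual soft-thresholding is replaced by a hard Top-$K$ projection. The weight names (`W_e`, `S`) and the initialization below are illustrative assumptions, not the paper's exact parameterization; in the actual method these matrices are learned jointly with the dictionary.

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def lista_topk_encode(x, W_e, S, k, n_iter=5):
    """Strict Top-K LISTA: unrolled iterations z <- TopK(W_e x + S z).
    W_e (encoder) and S (lateral term) would be learnable in training."""
    z = top_k(W_e @ x, k)
    for _ in range(n_iter):
        z = top_k(W_e @ x + S @ z, k)
    return z

# Toy usage with random (untrained) weights, for illustration only.
rng = np.random.default_rng(0)
d, m, k = 16, 32, 4
x = rng.normal(size=d)
W_e = rng.normal(size=(m, d)) / np.sqrt(d)
S = np.eye(m) - 0.1 * (W_e @ W_e.T)   # ISTA-style coupling, assumed form
z = lista_topk_encode(x, W_e, S, k)
print(np.count_nonzero(z))
```

The Top-$K$ projection guarantees an exactly $k$-sparse code at every layer, which is what makes the encoder's output sparsity level controllable by construction rather than through an $\ell_1$ penalty.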
Personalized Federated Dictionary Learning for Modeling Heterogeneity in Multi-site fMRI Data
Zhang, Yipu, Zhang, Chengshuo, Zhou, Ziyu, Qu, Gang, Zheng, Hao, Wang, Yuping, Shen, Hui, Deng, Hongwen
Data privacy constraints pose significant challenges for large-scale neuroimaging analysis, especially in multi-site functional magnetic resonance imaging (fMRI) studies, where site-specific heterogeneity leads to non-independent and identically distributed (non-IID) data. These factors hinder the development of generalizable models. To address these challenges, we propose Personalized Federated Dictionary Learning (PFedDL), a novel federated learning framework that enables collaborative modeling across sites without sharing raw data. PFedDL performs independent dictionary learning at each site, decomposing each site-specific dictionary into a shared global component and a personalized local component. The global atoms are updated via federated aggregation to promote cross-site consistency, while the local atoms are refined independently to capture site-specific variability, thereby enhancing downstream analysis. Experiments on the ABIDE dataset demonstrate that PFedDL outperforms existing methods in accuracy and robustness across non-IID datasets.
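The global/local dictionary split in PFedDL can be sketched in a few lines: each site holds a dictionary whose first atoms are the shared global component; an aggregation round averages only those atoms across sites, leaving the personalized atoms untouched. The function name and the unit-norm renormalization below are assumptions for illustration, not the paper's exact aggregation rule.

```python
import numpy as np

def federated_round(site_dicts, n_global):
    """One aggregation round: average the first n_global atoms (shared
    component) across sites; the remaining local atoms stay site-specific."""
    global_part = np.mean([D[:, :n_global] for D in site_dicts], axis=0)
    # Renormalize averaged atoms to unit norm (standard in dictionary learning).
    global_part /= np.linalg.norm(global_part, axis=0, keepdims=True)
    return [np.hstack([global_part, D[:, n_global:]]) for D in site_dicts]

# 3 sites, each with an 8-dimensional dictionary of 6 atoms (4 global, 2 local).
rng = np.random.default_rng(1)
dicts = [rng.normal(size=(8, 6)) for _ in range(3)]
new_dicts = federated_round(dicts, n_global=4)
# After the round, every site shares identical global atoms.
assert all(np.allclose(new_dicts[0][:, :4], D[:, :4]) for D in new_dicts)
```

Only the aggregated atoms cross site boundaries, which is how the scheme promotes cross-site consistency without exchanging raw fMRI data.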
From superposition to sparse codes: interpretable representations in neural networks
Klindt, David, O'Neill, Charles, Reizinger, Patrik, Maurer, Harald, Miolane, Nina
Understanding how information is represented in neural networks is a fundamental challenge in both neuroscience and artificial intelligence. Despite their nonlinear architectures, recent evidence suggests that neural networks encode features in superposition, meaning that input concepts are linearly overlaid within the network's representations. We present a perspective that explains this phenomenon and provides a foundation for extracting interpretable representations from neural activations. Our theoretical framework consists of three steps: (1) Identifiability theory shows that neural networks trained for classification recover latent features up to a linear transformation. (2) Sparse coding methods can extract disentangled features from these representations by leveraging principles from compressed sensing. (3) Quantitative interpretability metrics provide a means to assess the success of these methods, ensuring that extracted features align with human-interpretable concepts. By bridging insights from theoretical neuroscience, representation learning, and interpretability research, we propose an emerging perspective on understanding neural representations in both artificial and biological systems. Our arguments have implications for neural coding theories, AI transparency, and the broader goal of making deep learning models more interpretable.
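The pipeline sketched in steps (1)-(3) can be illustrated with a toy superposition experiment: sparse latent concepts are linearly overlaid into a lower-dimensional "activation" space, and a plain ISTA sparse coder recovers a sparse code against the mixing matrix. Using the ground-truth mixing matrix as the dictionary is a simplification; in practice the dictionary itself must be learned (e.g., by a sparse autoencoder), and all names below are illustrative.

```python
import numpy as np

def ista(X, D, lam=0.05, n_iter=200):
    """Plain ISTA: sparse-code the columns of X against dictionary D."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the data term
    Z = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        G = Z - (D.T @ (D @ Z - X)) / L    # gradient step on 0.5||X - DZ||^2
        Z = np.sign(G) * np.maximum(np.abs(G) - lam / L, 0.0)  # soft threshold
    return Z

# Simulate superposition: 50 sparse concepts overlaid into 20 "neurons".
rng = np.random.default_rng(0)
n_neurons, n_concepts, n_samples = 20, 50, 200
A = rng.normal(size=(n_neurons, n_concepts))
A /= np.linalg.norm(A, axis=0)             # unit-norm concept directions
mask = rng.random((n_concepts, n_samples)) < 0.05
Z0 = rng.normal(size=(n_concepts, n_samples)) * mask   # ~5% active concepts
X = A @ Z0                                 # activations: concepts in superposition
Z_hat = ista(X, A)                         # sparse decoding of the activations
```

Because there are more concepts than neurons, no linear readout can invert the mixing; compressed-sensing-style sparse recovery is what makes the overcomplete concept code identifiable.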
Review for NeurIPS paper: Neural Sparse Representation for Image Restoration
Weaknesses: The paper is rather weak on the theoretical side of sparsity and on the existing work. The paper claims in the introduction that "sparsity of hidden representation in deep neural networks cannot be solved by iterative optimization as sparse coding". I do not understand this claim, since algorithms such as LISTA do compute sparse codes from a few layers in deep networks. The fact that sparsity is needed for denoising, compression, or inverse problems is well understood independently of neural networks, and results from work carried out by many researchers such as Donoho between 1995 and 2005. I do not understand why the authors say that such sparsity cannot be implemented, given that a ReLU is the proximal operator of a positive l1 sparse coder, that many algorithms implement a sparse code with such architectures, and that such architectures with ReLU achieve very good performance for denoising and inverse problems, as shown by "Convolutional Neural Networks for Inverse Problems in Imaging: A Review" published in 2017, and much more work has been done since.
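The reviewer's point that a ReLU is the proximal operator of a positive l1 sparse coder can be checked directly: minimizing 0.5(z - v)^2 + lam*z subject to z >= 0 gives z* = max(v - lam, 0), i.e., a ReLU applied to a bias-shifted input. A minimal numerical check of this identity:

```python
import numpy as np

def prox_pos_l1(v, lam):
    """Proximal operator of lam*||z||_1 + indicator(z >= 0):
    argmin_z 0.5*(z - v)^2 + lam*z  s.t. z >= 0  ==>  max(v - lam, 0)."""
    return np.maximum(v - lam, 0.0)

def relu(v):
    return np.maximum(v, 0.0)

v = np.linspace(-2.0, 2.0, 9)
lam = 0.5
# The prox is exactly a ReLU with a learned bias of -lam.
assert np.allclose(prox_pos_l1(v, lam), relu(v - lam))
```

This is why standard feed-forward layers with ReLU and biases can implement ISTA/LISTA-style sparse coding iterations, which is the substance of the reviewer's objection.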
On Design Choices in Similarity-Preserving Sparse Randomized Embeddings
Kleyko, Denis, Rachkovskij, Dmitri A.
Expand & Sparsify is a principle observed in anatomically similar neural circuits found in the mushroom body (insects) and the cerebellum (mammals). Sensory data are projected randomly to a much higher-dimensional space (the expand part), where only a few of the most strongly excited neurons are activated (the sparsify part). This principle has been leveraged to design the FlyHash algorithm, which forms similarity-preserving sparse embeddings that have been found useful for tasks such as novelty detection, pattern recognition, and similarity search. Despite its simplicity, FlyHash has a number of design choices to be set, such as the preprocessing of the input data, the choice of sparsifying activation function, and the formation of the random projection matrix. In this paper, we explore the effect of these choices on the performance of similarity search with FlyHash embeddings. We find that the right combination of design choices can lead to drastic differences in search performance.
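The expand-and-sparsify scheme behind FlyHash is short enough to sketch directly: a random (sparse binary, in the original algorithm) projection expands the input, and a top-k winner-take-all keeps only the most excited units. The projection density, the mean-centering preprocessing, and the dimensions below are examples of exactly the design choices the paper studies, not canonical settings.

```python
import numpy as np

def flyhash(x, M, k):
    """FlyHash: random expansion M @ x followed by top-k winner-take-all."""
    y = M @ x
    h = np.zeros_like(y)
    h[np.argsort(y)[-k:]] = 1.0     # keep only the k most excited units
    return h

rng = np.random.default_rng(0)
d, m, k = 50, 1000, 32              # expand 50 -> 1000 dims, keep 32 active
M = (rng.random((m, d)) < 0.1).astype(float)   # sparse binary projection
x = rng.random(d)
x = x - x.mean()                    # mean-centering: one preprocessing choice
h = flyhash(x, M, k)
assert h.sum() == k                 # embedding is exactly k-sparse
```

Each of the three knobs (preprocessing of `x`, the thresholding rule, and the distribution of `M`) can be varied independently, which is what makes the design space worth a systematic study.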
Efficient Reinforcement Learning for Optimal Control with Natural Images
Reinforcement learning solves optimal control and sequential decision problems widely found in control systems engineering, robotics, and artificial intelligence. This work investigates optimal control over a sequence of natural images. The problem is formalized, and general conditions are derived for an image to be sufficient for implementing an optimal policy. Reinforcement learning is shown to be efficient only for certain types of image representations. This is demonstrated by developing a reinforcement learning benchmark that scales easily with the number of states and the length of the horizon, and has optimal policies that are easily distinguished from suboptimal policies. Image representations given by overcomplete sparse codes are found to be computationally efficient for optimal control, using fewer computational resources to learn and evaluate optimal policies. For natural images of fixed size, representing each image as an overcomplete sparse code in a linear network is shown to increase network storage capacity by orders of magnitude beyond that possible for any complete code, allowing larger tasks with many more states to be solved. Sparse codes can be generated by devices with low energy requirements and low computational overhead.
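One way to see why sparse codes make reinforcement learning computationally cheap: with a linear value function over a k-sparse feature vector, a TD(0) update touches only the k active weights, regardless of how large the overcomplete code is. The sketch below is an illustration of that general point, not the paper's benchmark or method; the binary k-sparse features stand in for an image's sparse code.

```python
import numpy as np

def td0_update(w, phi_s, r, phi_s_next, alpha=0.1, gamma=0.9):
    """TD(0) with a linear value function V(s) = w @ phi(s).
    With a k-sparse phi(s), the update modifies only k weights."""
    delta = r + gamma * (w @ phi_s_next) - (w @ phi_s)   # TD error
    idx = np.flatnonzero(phi_s)            # only active features change
    w[idx] += alpha * delta * phi_s[idx]
    return w

# Toy k-sparse binary codes standing in for image representations.
rng = np.random.default_rng(0)
m, k = 100, 5                              # overcomplete code size, sparsity
phi = np.zeros(m); phi[rng.choice(m, k, replace=False)] = 1.0
phi_next = np.zeros(m); phi_next[rng.choice(m, k, replace=False)] = 1.0
w = np.zeros(m)
w = td0_update(w, phi, r=1.0, phi_s_next=phi_next)
```

The per-step cost scales with k rather than m, which is the sense in which overcomplete sparse codes let much larger state spaces be handled with fixed compute.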